Content-controllable summarization generates summaries focused on given controlling signals. Because large-scale training corpora for the task are lacking, we propose RelAttn, a plug-and-play module that adapts any general summarizer to content-controllable summarization. RelAttn first identifies the relevant content in the source documents, and then makes the model attend to the right context by directly steering the attention weights. We further apply an unsupervised online adaptive parameter-searching algorithm to determine the degree of control in the zero-shot setting, while these parameters are learned in the few-shot setting. Experiments applying the module to three backbone summarization models show that our method effectively improves all of them and outperforms the prefix-based method and a widely used plug-and-play model in both zero- and few-shot settings. Tellingly, more benefit is observed in scenarios where more control is needed.
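The core operation, biasing attention toward relevant source tokens, can be pictured as a convex blend of the model's attention distribution with a normalized relevance prior. A minimal sketch follows; the abstract does not give RelAttn's exact formula, so the blending form and the control-strength parameter `gamma` are illustrative assumptions.

```python
import torch

def steer_attention(attn, relevance, gamma=0.3):
    """Blend model attention with a relevance prior over source tokens.

    attn:      (batch, heads, tgt_len, src_len) softmax attention weights
    relevance: (batch, src_len) non-negative relevance scores
    gamma:     degree of control; gamma = 0 recovers the original attention
    """
    prior = relevance / relevance.sum(dim=-1, keepdim=True)  # normalize to a distribution
    prior = prior[:, None, None, :]                          # broadcast over heads/targets
    steered = (1 - gamma) * attn + gamma * prior             # convex blend
    return steered / steered.sum(dim=-1, keepdim=True)       # keep rows normalized

attn = torch.softmax(torch.randn(2, 8, 5, 10), dim=-1)
relevance = torch.rand(2, 10)
steered = steer_attention(attn, relevance, gamma=0.3)
```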
We propose a sparse end-to-end multi-person pose regression framework, termed QueryPose, which can directly predict multi-person keypoint sequences from an input image. Existing end-to-end methods rely on dense representations to preserve the spatial detail and structure needed for precise keypoint localization, but the dense paradigm introduces complex and redundant post-processing during inference. In our framework, each human instance is encoded by several learnable spatial-aware part-level queries associated with an instance-level query. First, we propose the Spatial Part Embedding Generation Module (SPEGM), which uses a local spatial attention mechanism to generate spatial-sensitive part embeddings that carry the spatial details and structural information needed to enhance the part-level queries. Second, we introduce the Selective Iteration Module (SIM), which adaptively updates the sparse part-level queries with the generated spatial-sensitive part embeddings stage by stage. Based on these two modules, the part-level queries can fully encode spatial details and structural information for precise keypoint regression. With bipartite matching, QueryPose avoids hand-designed post-processing and surpasses existing dense end-to-end methods, reaching 73.6 AP on the MS COCO mini-val set and 72.7 AP on the CrowdPose test set. Code is available at https://github.com/buptxyb666/QueryPose.
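The abstract does not detail SIM's update rule; as a rough, hedged sketch, one stage of adaptively fusing spatial-sensitive part embeddings into part-level queries could look like the gated update below. The gating form, dimensions, and the `SelectiveIteration` name are assumptions, not the paper's actual module.

```python
import torch
import torch.nn as nn

class SelectiveIteration(nn.Module):
    """One stage of adaptively fusing part embeddings into part-level queries."""
    def __init__(self, dim=256):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, part_queries, part_embeddings):
        # part_queries, part_embeddings: (batch, num_parts, dim)
        g = torch.sigmoid(self.gate(torch.cat([part_queries, part_embeddings], dim=-1)))
        # The gate decides, per channel, how much new spatial detail to take in.
        return self.norm(g * part_embeddings + (1 - g) * part_queries)

sim = SelectiveIteration()
queries = torch.randn(2, 17, 256)   # 17 COCO keypoints as part-level queries
embeds = torch.randn(2, 17, 256)    # spatial-sensitive part embeddings
queries = sim(queries, embeds)      # repeated stage by stage in the decoder
```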
The ability to understand and generate similes is an essential step toward human-level AI. However, a considerable gap remains between machine intelligence and human cognition of similes, since deep models based on statistical distributions tend to favour high-frequency similes. A large-scale symbolic knowledge base of similes is therefore required: it supports the modeling of diverse yet unpopular similes while facilitating additional evaluation and reasoning. To bridge the gap, we propose a novel framework for large-scale simile knowledge base construction, along with two probabilistic metrics that enable an improved understanding of simile phenomena in natural language. Using this framework, we construct MAPS-KB, a million-scale probabilistic simile knowledge base covering 4.3 million triplets over 0.4 million terms, built from 70 GB of corpora. We conduct extensive experiments to justify the effectiveness and necessity of the framework's methods. We also apply MAPS-KB to three downstream tasks and achieve state-of-the-art performance, further demonstrating its value.
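The two probabilistic metrics are not defined in the abstract; the sketch below only illustrates the general idea of scoring a (topic, property, vehicle) simile triplet with corpus frequencies. The metric name and formula are assumptions, not MAPS-KB's actual definitions.

```python
from collections import Counter

# Toy corpus of (topic, property, vehicle) simile triplets.
triplets = [
    ("cheeks", "red", "apples"),
    ("cheeks", "red", "roses"),
    ("eyes", "bright", "stars"),
    ("eyes", "bright", "stars"),
]

triplet_freq = Counter(triplets)
pair_freq = Counter((t, p) for t, p, _ in triplets)

def plausibility(topic, prop, vehicle):
    """P(vehicle | topic, property): how typical this vehicle is for the pair."""
    pair = pair_freq[(topic, prop)]
    return triplet_freq[(topic, prop, vehicle)] / pair if pair else 0.0

print(plausibility("eyes", "bright", "stars"))   # 1.0
print(plausibility("cheeks", "red", "roses"))    # 0.5
```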
Attention-based arbitrary style transfer studies have shown promising performance in synthesizing vivid local style details. They typically use an all-to-all attention mechanism: each position of the content features is fully matched to all positions of the style features. However, all-to-all attention tends to generate distorted style patterns and has quadratic complexity, limiting both the effectiveness and the efficiency of arbitrary style transfer. In this paper, we rethink what kind of attention mechanism is more appropriate for arbitrary style transfer. Our answer is a novel all-to-key attention mechanism: each position of the content features is matched to key positions of the style features. Specifically, it integrates two newly proposed attention forms: distributed attention, which assigns attention to multiple key positions, and progressive attention, which pays attention from coarse to fine. All-to-key attention promotes the matching of diverse and reasonable style patterns and has linear complexity. The resulting module, dubbed StyA2K, excels at rendering reasonable style textures and maintaining consistent local structure. Qualitative and quantitative experiments demonstrate that our method achieves superior results to state-of-the-art approaches.
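The distributed-attention half, matching each content position to only a handful of key style positions, can be approximated with a top-k restricted softmax, sketched below. Note that this naive masking still computes all scores and so stays quadratic; the linear complexity claimed in the paper comes from its actual design, and the progressive coarse-to-fine component is omitted here.

```python
import torch

def all_to_key_attention(q, k, v, top_k=8):
    """Each content position attends only to its top-k style positions.

    q: (batch, n_content, dim); k, v: (batch, n_style, dim)
    """
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5  # (B, Nc, Ns)
    vals, idx = scores.topk(top_k, dim=-1)                 # keep k best keys
    masked = torch.full_like(scores, float("-inf"))
    masked.scatter_(-1, idx, vals)                         # drop the rest
    return torch.softmax(masked, dim=-1) @ v               # sparse attention

q = torch.randn(1, 64, 32)    # content features
k = torch.randn(1, 100, 32)   # style keys
v = torch.randn(1, 100, 32)   # style values
out = all_to_key_attention(q, k, v)  # (1, 64, 32)
```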
In current anchor-based detectors, a positive anchor box is intuitively assigned to the object it overlaps the most. The label assigned to each anchor directly determines the optimization direction of the corresponding prediction box, including the direction of box regression and category prediction. In our practice of crowded object detection, however, the results show that when multiple objects overlap, a positive anchor does not always regress toward the object it overlaps the most. We term this phenomenon anchor drift. Anchor drift reflects that the anchor-object matching relation, determined by the degree of overlap between anchors and objects, is not always optimal. Conflicts between the fixed matching relation and the experience learned during training may cause ambiguous predictions and thus raise the false-positive rate. In this paper, we propose a simple but efficient adaptive two-stage anchor assignment (TSAA) method. It uses the final prediction boxes, rather than the fixed anchors, to calculate the degree of overlap with objects and determine which object each anchor should regress toward. The participation of the prediction boxes makes the anchor-object assignment mechanism adaptive. Extensive experiments on three classic detectors (RetinaNet, Faster R-CNN, and YOLOv3) on CrowdHuman and COCO evaluate the effectiveness of TSAA. The results show that TSAA can significantly improve the detectors' performance without additional computational cost or changes to the network structure.
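The key second-stage step, reassigning each anchor to the ground-truth object its current prediction overlaps most, reduces to an IoU argmax over prediction boxes. A simplified sketch follows; thresholds for discarding low-overlap anchors and any tie-breaking rules from the paper are omitted.

```python
import numpy as np

def iou(boxes_a, boxes_b):
    """Pairwise IoU between (N, 4) and (M, 4) boxes in x1,y1,x2,y2 format."""
    x1 = np.maximum(boxes_a[:, None, 0], boxes_b[None, :, 0])
    y1 = np.maximum(boxes_a[:, None, 1], boxes_b[None, :, 1])
    x2 = np.minimum(boxes_a[:, None, 2], boxes_b[None, :, 2])
    y2 = np.minimum(boxes_a[:, None, 3], boxes_b[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (boxes_a[:, 2] - boxes_a[:, 0]) * (boxes_a[:, 3] - boxes_a[:, 1])
    area_b = (boxes_b[:, 2] - boxes_b[:, 0]) * (boxes_b[:, 3] - boxes_b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def tsaa_assign(pred_boxes, gt_boxes):
    """Stage 2: match each anchor's *prediction* to the GT it overlaps most."""
    overlaps = iou(pred_boxes, gt_boxes)  # (num_anchors, num_gt)
    return overlaps.argmax(axis=1)        # adaptive target per anchor

preds = np.array([[0, 0, 10, 10], [5, 5, 15, 15]], dtype=float)
gts = np.array([[0, 0, 9, 9], [6, 6, 14, 14]], dtype=float)
print(tsaa_assign(preds, gts))  # [0 1]
```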
Spiking neural networks (SNNs) are promising brain-inspired, energy-efficient models. Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency. In particular, backpropagation through time (BPTT) with surrogate gradients (SG) is popularly used to achieve high performance in a very small number of time steps. However, this comes at the cost of large memory consumption during training, a lack of theoretical clarity for optimization, and inconsistency with the online nature of biological learning and of learning rules on neuromorphic hardware. Other works connect the spike representations of SNNs with equivalent artificial neural network formulations and train SNNs with gradients from the equivalent mappings to ensure descent directions, but they fail to achieve low latency and are likewise not online. In this work, we propose online training through time (OTTT) for SNNs, which is derived from BPTT and enables forward-in-time learning by tracking presynaptic activities and leveraging instantaneous losses and gradients. We theoretically analyze and prove that the gradients of OTTT provide a descent direction for optimization similar to the gradients based on spike representations, under both feedforward and recurrent conditions. OTTT requires only constant training memory regardless of the number of time steps, avoiding the significant memory costs of BPTT in GPU training. Furthermore, the OTTT update rule takes the form of three-factor Hebbian learning, which could pave a path toward online on-chip learning. OTTT connects, for the first time, the two mainstream supervised SNN training methods, BPTT with SG and spike-representation-based training, and does so in a biologically plausible form. Experiments on CIFAR-10, CIFAR-100, ImageNet, and CIFAR10-DVS demonstrate the superior performance of our method on large-scale static and neuromorphic datasets with small numbers of time steps.
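The forward-in-time rule can be sketched as an eligibility-trace update: each input keeps a leaky running trace of presynaptic spikes, and the instantaneous weight gradient is the outer product of the output error with that trace, a three-factor form. The constants and exact trace dynamics below are assumptions for illustration.

```python
import numpy as np

def ottt_step(trace, pre_spikes, post_error, lam=0.9):
    """One forward-in-time update for a fully connected spiking layer.

    trace:      (n_pre,)  leaky running trace of presynaptic spikes
    pre_spikes: (n_pre,)  binary spikes at the current time step
    post_error: (n_post,) instantaneous error signal at the output
    Returns the updated trace and an instantaneous weight gradient.
    """
    trace = lam * trace + pre_spikes      # presynaptic eligibility trace
    grad_w = np.outer(post_error, trace)  # (n_post, n_pre): three-factor product
    return trace, grad_w

trace = np.zeros(4)
for t in range(3):                        # memory stays constant across time steps
    spikes = np.random.binomial(1, 0.5, 4).astype(float)
    error = np.random.randn(2)
    trace, grad = ottt_step(trace, spikes, error)
```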
Fast and efficient semantic segmentation of large-scale LiDAR point clouds is a fundamental problem in autonomous driving. To achieve this, existing point-based methods mainly adopt a random sampling strategy to process large-scale point clouds. However, our quantitative and qualitative studies find that random sampling may be unsuitable for autonomous driving scenarios: LiDAR points follow a non-uniform, even long-tailed, distribution over the whole space, which prevents the model from capturing sufficient information across different distance ranges and degrades its learning ability. To alleviate this problem, we propose a new polar-cylinder-balanced random sampling method that enables the downsampled point clouds to maintain a more balanced distribution and improves segmentation performance under different spatial distributions. A sampling-consistency loss is further introduced to improve segmentation performance and reduce the model's variance under different sampling methods. Extensive experiments confirm that our approach yields excellent performance on both the SemanticKITTI and SemanticPOSS benchmarks, achieving improvements of 2.8% and 4.0%, respectively.
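A minimal sketch of the sampling idea: bin points by polar angle and planar radius, then draw a roughly equal quota from each occupied bin so that distant, sparse regions are not drowned out by nearby dense ones. The bin counts and per-bin quota rule are assumptions, not the paper's exact procedure.

```python
import numpy as np

def polar_balanced_sample(points, n_samples, n_angle_bins=32, n_radius_bins=8):
    """points: (N, 3) xyz. Returns indices of a distribution-balanced subset."""
    angle = np.arctan2(points[:, 1], points[:, 0])   # polar angle
    radius = np.linalg.norm(points[:, :2], axis=1)   # planar range
    a_bin = np.digitize(angle, np.linspace(-np.pi, np.pi, n_angle_bins))
    r_bin = np.digitize(radius, np.linspace(0, radius.max() + 1e-6, n_radius_bins))
    bin_id = a_bin * (n_radius_bins + 1) + r_bin     # unique polar-cylinder cell

    chosen = []
    occupied = np.unique(bin_id)
    quota = max(1, n_samples // len(occupied))       # equal share per cell
    for b in occupied:
        idx = np.flatnonzero(bin_id == b)
        chosen.extend(np.random.choice(idx, min(quota, len(idx)), replace=False))
    return np.array(chosen[:n_samples])

pts = np.random.randn(10000, 3) * [30, 30, 2]        # mock LiDAR sweep
subset = polar_balanced_sample(pts, 2048)
```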
Industry assignment, in which companies are assigned to industries according to a predefined industry classification system (ICS), is fundamental to a wide range of critical business practices, from operational and strategic decision-making by companies to economic analyses by government agencies. Three types of expert knowledge are essential for effective industry assignment: definition-based knowledge (i.e., expert definitions of each industry), structure-based knowledge (i.e., structural relationships among industries as specified in the ICS), and assignment-based knowledge (i.e., prior company-industry assignments performed by domain experts). Existing industry assignment methods utilize only assignment-based knowledge to learn a model that classifies unassigned companies into industries, and ignore definition-based and structure-based knowledge. Moreover, these methods only consider which industry a company was assigned to, but ignore the time specificity of assignment-based knowledge, i.e., when the assignment occurred. To address these limitations, we propose a novel deep-learning-based method that not only seamlessly integrates the three types of knowledge for industry assignment but also accounts for the time specificity of assignment-based knowledge. Methodologically, our method features two innovations: dynamic industry representation and hierarchical assignment. The former represents an industry as a sequence of time-specific vectors by integrating the three types of knowledge through our proposed temporal and spatial aggregation mechanisms. The latter takes the representations of industries and a company as inputs, computes the probabilities of assigning the company to different industries, and assigns the company to the industry with the highest probability.
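The hierarchical assignment step reduces to scoring a company against each industry's time-specific vector and picking the most probable industry. A hedged sketch follows; the dot-product scorer and softmax are assumptions, and the paper's model is presumably richer.

```python
import numpy as np

def assign_industry(company_vec, industry_vecs):
    """company_vec: (d,); industry_vecs: (n_industries, d), taken at the
    assignment time. Returns (probabilities, assigned industry index)."""
    scores = industry_vecs @ company_vec   # similarity per industry
    probs = np.exp(scores - scores.max())  # numerically stable softmax
    probs /= probs.sum()
    return probs, int(probs.argmax())      # highest-probability industry

rng = np.random.default_rng(0)
company = rng.normal(size=64)
industries = rng.normal(size=(20, 64))     # 20 time-specific industry vectors
probs, assigned = assign_industry(company, industries)
```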
Many data analysis tasks rely heavily on a deep understanding of tables (multi-dimensional data). Across these tasks, there exist commonly used metadata attributes of table fields/columns. In this paper, we identify four such analysis metadata: the measure/dimension dichotomy, common field roles, semantic field types, and default aggregation functions. While inferring these metadata faces the challenge of insufficient supervision signals, it can leverage existing knowledge and an understanding of field distributions. To infer these metadata for a raw table, we propose a multi-task metadata model that fuses field distributions and knowledge-graph information into pre-trained tabular models. For model training and evaluation, we collect a large corpus of analysis metadata (~582K tables from private spreadsheets and public tabular datasets) by using diverse smart supervision from downstream tasks. Our best model achieves an accuracy of 98%, a hit rate at top-1 above 67%, an accuracy above 80%, and an accuracy of 88% on the four analysis metadata inference tasks, respectively. It outperforms a series of baselines based on rules, traditional machine learning methods, and pre-trained tabular models. The analysis metadata model has been deployed in a popular data analysis product, supporting downstream intelligent features such as insight mining, chart/pivot-table recommendation, and natural language QA...
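The four inference tasks can be framed as per-field prediction heads over a shared field embedding, as in the sketch below. The head sizes, label spaces, and `MetadataHeads` interface are assumptions; the actual model fuses field distributions and knowledge-graph features into a pre-trained tabular encoder, which is not reproduced here.

```python
import torch
import torch.nn as nn

class MetadataHeads(nn.Module):
    """Four per-field prediction heads over a shared field embedding."""
    def __init__(self, dim=256, n_sem_types=20, n_agg_funcs=8):
        super().__init__()
        self.measure_dim = nn.Linear(dim, 2)            # measure vs. dimension
        self.field_role = nn.Linear(dim, 2)             # common field role
        self.sem_type = nn.Linear(dim, n_sem_types)     # semantic field type
        self.default_agg = nn.Linear(dim, n_agg_funcs)  # e.g. SUM, AVG, COUNT

    def forward(self, field_emb):                       # (batch, n_fields, dim)
        return {
            "measure_dimension": self.measure_dim(field_emb),
            "field_role": self.field_role(field_emb),
            "semantic_type": self.sem_type(field_emb),
            "default_aggregation": self.default_agg(field_emb),
        }

heads = MetadataHeads()
logits = heads(torch.randn(4, 10, 256))                 # 4 tables, 10 fields each
```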
Automatic measurement of lesion/tumor size with RECIST (Response Evaluation Criteria In Solid Tumors) diameters and segmentation is important for computer-aided diagnosis. Although it has been studied in recent years, there is still room to improve its accuracy and robustness, such as (1) enhancing features by incorporating rich contextual information while maintaining high spatial resolution, and (2) involving new tasks and losses for joint optimization. To achieve this, this paper proposes a transformer-based network (MeaFormer, Measurement transFormer) for lesion RECIST diameter prediction and segmentation (LRDPS). The problem is formulated as three correlated and complementary tasks: lesion segmentation, heatmap prediction, and keypoint regression. To the best of our knowledge, this is the first use of keypoint regression for RECIST diameter prediction. MeaFormer enhances high-resolution features by capturing their long-range dependencies with transformers. Two consistency losses are introduced to explicitly build relationships among these tasks for better optimization. Experiments show that MeaFormer achieves state-of-the-art performance for LRDPS on the large-scale DeepLesion dataset and produces promising results on two downstream clinical tasks, i.e., 3D lesion segmentation and RECIST assessment in longitudinal studies.
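Once the keypoints are regressed, the RECIST diameters follow directly as distances between paired endpoints. A small sketch of that final measurement step is below; the keypoint ordering and pixel-spacing handling are assumptions.

```python
import numpy as np

def recist_diameters(keypoints, spacing_mm=1.0):
    """keypoints: (4, 2) array of [long_1, long_2, short_1, short_2] in pixels.

    Returns the long- and short-axis diameters in millimeters.
    """
    long_d = np.linalg.norm(keypoints[0] - keypoints[1]) * spacing_mm
    short_d = np.linalg.norm(keypoints[2] - keypoints[3]) * spacing_mm
    return long_d, short_d

kps = np.array([[10.0, 12.0], [48.0, 40.0], [22.0, 35.0], [35.0, 18.0]])
print(recist_diameters(kps, spacing_mm=0.8))  # (long, short) in mm
```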